41.
Summary: The successful use of layered silicates in polymer nanocomposites to improve physical and chemical properties depends on a deeper understanding of the mechanisms that determine the final material properties. This work shows the temperature-induced structural rearrangements, occurring between 75 and 350 °C, of nanocomposites based on poly[ethylene-co-(vinyl acetate)] (EVA) intercalated with organomodified clay (at 3–30 wt.-% silicate loading). In situ high-temperature X-ray diffraction (HT-XRD) studies were performed under both nitrogen and air to monitor the modifications of the nanocomposite structure with increasing temperature under inert and oxidative atmospheres. Heating between 75 and 225 °C, under either nitrogen or air, causes the layered silicate to migrate towards the nanocomposite surface and its interlayer distance to increase. The degradation of both the clay organomodifier and the VA units of the EVA polymer appears to play a key role in driving the evolution of the silicate phase in the low-temperature range. The structural modifications of the nanocomposites in the high-temperature range (250–350 °C) depend on the atmosphere, inert or oxidizing, in which the samples are heated: heating under nitrogen leads to deintercalation and thus a decrease of the silicate interlayer space, whereas under air exfoliation dominates, increasing the interlayer space.
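The interlayer distances discussed above are obtained from the position of the basal d(001) reflection in the HT-XRD patterns via Bragg's law. A minimal sketch of that conversion, assuming Cu Kα radiation (λ = 1.5406 Å) and illustrative peak positions that are not taken from the paper:

```python
import math

def interlayer_spacing(two_theta_deg, wavelength_angstrom=1.5406):
    """Bragg's law with n = 1: d = lambda / (2 sin theta).
    two_theta_deg is the diffraction peak position in degrees 2-theta."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta))

# Hypothetical example: a d(001) peak shifting from ~2.45 to ~1.80 deg
# 2-theta corresponds to the interlayer distance growing on heating.
d_before = interlayer_spacing(2.45)   # roughly 36 Angstrom
d_after = interlayer_spacing(1.80)    # roughly 49 Angstrom
```

A peak moving to lower 2θ thus signals a larger interlayer spacing, consistent with the intercalation and exfoliation behavior described in the abstract.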

Heat-induced structural modification of an EVA-clay nanocomposite under nitrogen and air.

42.
Digital cameras, new-generation phones, commercial TV sets and, in general, all modern devices for image acquisition and visualization can benefit from image-enhancement algorithms suitable for real-time operation, preferably with limited power consumption. Among the various methods described in the scientific literature, Retinex-based approaches provide very good performance, but unfortunately they typically require a high computational effort. In this article, we propose a flexible and effective architecture for the real-time enhancement of video frames, suitable for implementation in a single FPGA device. The video-enhancement algorithm is based on a modified version of the Retinex approach. This method, developed to control the dynamic range of poorly illuminated images while preserving visual details, has been improved by adopting a new model for illuminance estimation. The video-enhancement parameters are controlled in real time by an embedded microprocessor, which enables the system to adapt its behavior to the characteristics of the input images and to information about the surrounding light conditions.
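The illuminance-estimation step at the heart of Retinex-style enhancement can be sketched as follows. This is a generic single-scale Retinex in pure Python, using a crude box-blur illuminance estimate on a grayscale image given as a list of lists; it is not the modified model or the FPGA architecture of the article:

```python
import math

def box_blur(img, radius=1):
    """Crude local average, used here as the illuminance estimate L(x, y)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def single_scale_retinex(img):
    """R = log(I) - log(L): divide out the estimated illuminance to
    compress the dynamic range while keeping local detail."""
    L = box_blur(img)
    return [[math.log(img[y][x] + 1.0) - math.log(L[y][x] + 1.0)
             for x in range(len(img[0]))] for y in range(len(img))]
```

A pixel brighter than its local surround gets a positive response, one darker gets a negative response, regardless of the absolute illumination level.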
43.
44.
The evolution of the profile of nanometer-sized water drops on a mica surface has been studied through hydration scanning probe microscopy. A time range from a few seconds down to a fraction of a millisecond after the formation of the drop has been explored. This high time resolution has been obtained by sampling a series of statistically equivalent drops. This approach also avoids any probe interference during the drop evolution process.
45.
The evolution of the web has outpaced itself: A growing wealth of information and increasingly sophisticated interfaces necessitate automated processing, yet existing automation and data extraction technologies have been overwhelmed by this very growth. To address this trend, we identify four key requirements for web data extraction, automation, and (focused) web crawling: (1) interact with sophisticated web application interfaces, (2) precisely capture the relevant data to be extracted, (3) scale with the number of visited pages, and (4) readily embed into existing web technologies. We introduce OXPath as an extension of XPath for interacting with web applications and extracting data thus revealed—matching all the above requirements. OXPath’s page-at-a-time evaluation guarantees memory use independent of the number of visited pages, yet remains polynomial in time. We experimentally validate the theoretical complexity and demonstrate that OXPath’s resource consumption is dominated by page rendering in the underlying browser. With an extensive study of sublanguages and properties of OXPath, we pinpoint the effect of specific features on evaluation performance. Our experiments show that OXPath outperforms existing commercial and academic data extraction tools by a wide margin.
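The page-at-a-time evaluation property can be illustrated with a small generator sketch: only the current page is held in memory while extracted records are streamed out, so memory use stays constant in the number of pages visited. The `fetch`, `extract`, and `next_page` hooks are hypothetical placeholders supplied by the caller, not part of OXPath:

```python
def crawl_and_extract(start_url, fetch, extract, next_page, max_pages=100):
    """Stream extraction results page-at-a-time: the previous page is
    dropped before the next one is loaded, so memory does not grow
    with the number of pages visited."""
    url, visited = start_url, 0
    while url is not None and visited < max_pages:
        page = fetch(url)          # load/render exactly one page
        yield from extract(page)   # emit its records immediately
        url = next_page(page)      # follow pagination, discard page
        visited += 1
```

In OXPath itself this property is a guarantee of the evaluation strategy; the sketch only mimics its memory behavior for a linear pagination chain.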
46.
CPU demand for web serving: Measurement analysis and dynamic estimation
Giovanni, Wolfgang, Mike, Asser. Performance Evaluation, 2008, 65(6-7): 531-553
Managing the resources in a large Web serving system requires knowledge of the resource needs for service requests of various types. In order to investigate the properties of Web traffic and its demand, we collected measurements of throughput and CPU utilization and performed some data analyses. First, we present our findings in relation to the time-varying nature of the traffic, the skewness of traffic intensity among the various types of requests, the correlation among traffic streams, and other system-related phenomena. Then, given this nature of Web traffic, we devise and implement an on-line method for the dynamic estimation of CPU demand.

Assessing resource needs is commonly performed using techniques such as off-line profiling, application instrumentation, and kernel-based instrumentation. Little attention has been given to estimating resource needs dynamically, relying only on external, high-level measurements such as overall resource utilization and request rates. We consider the problem of dynamically estimating the CPU demands of multiple kinds of requests from CPU utilization and throughput measurements. We formulate the problem as a multivariate linear regression problem and obtain its basic solution. However, as our measurement data analysis indicates, one is faced with issues such as insignificant flows, collinear flows, spatial and temporal variations, and background noise. To deal with these issues, we present several mechanisms, including data aging, flow rejection, flow combining, noise reduction, and smoothing. We implemented these techniques in a Work Profiler component that we delivered as part of a broader system-management product, and we present experimental results from using this component in scenarios inspired by real-world usage of that product.
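The basic regression step can be sketched concretely: with per-class throughputs λ_i and measured utilization u ≈ Σ_i c_i λ_i, the per-class CPU demands c_i are the least-squares solution of the normal equations. A minimal two-class version in pure Python, assuming no intercept term; the paper's Work Profiler adds aging, flow rejection, combining, and smoothing on top of such a solver:

```python
def estimate_cpu_demands(throughputs, utilizations):
    """Least-squares fit of per-class CPU demands (c1, c2) from
    utilization ~= c1 * lambda1 + c2 * lambda2, via the normal
    equations A^T A c = A^T u for a 2-column design matrix."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for (l1, l2), u in zip(throughputs, utilizations):
        s11 += l1 * l1; s12 += l1 * l2; s22 += l2 * l2
        b1 += l1 * u; b2 += l2 * u
    det = s11 * s22 - s12 * s12  # zero det would mean collinear flows
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)
```

A near-singular determinant is exactly the collinear-flows problem the abstract mentions, which motivates the flow-combining and rejection mechanisms.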

47.
We introduce and study a two-dimensional variational model for the reconstruction of a smooth generic solid shape E which can handle self-occlusions and can be considered an improvement of the 2.1D sketch of Nitzberg and Mumford (Proceedings of the Third International Conference on Computer Vision, Osaka, 1990). We characterize the apparent contour of E from the topological viewpoint: namely, we characterize those planar graphs that are apparent contours of some shape E. This is the classical problem of recovering a three-dimensional layered shape from its apparent contour, which is of interest in theoretical computer vision. We make use of the so-called Huffman labeling (Machine Intelligence, vol. 6, Am. Elsevier, New York, 1971); see also the papers of Williams (Ph.D. Dissertation, 1994 and Int. J. Comput. Vis. 23:93–108, 1997) and of Karpenko and Hughes (Preprint, 2006) for related results. Moreover, we show that if E and F are two shapes having the same apparent contour, then E and F differ by a global homeomorphism which is strictly increasing on each fiber along the direction of the eye of the observer. These two topological theorems allow us to determine the domain of the functional ℱ describing the model. Compactness, semicontinuity and relaxation properties of ℱ are then studied, as well as connections of our model with the problem of completion of hidden contours.
48.
Traditionally, direct marketing companies have relied on pre-testing to select the best offers to send to their audience. Companies systematically dispatch the offers under consideration to a limited sample of potential buyers, rank them with respect to their performance and, based on this ranking, decide which offers to send to the wider population. Though this pre-testing process is simple and widely used, the industry has recently been under increased pressure to further optimize learning, in particular when facing severe time and learning-space constraints. The main contribution of the present work is to demonstrate that direct marketing firms can exploit information on visual content to optimize the learning phase. This paper proposes a two-phase learning strategy based on a cascade of regression methods that takes advantage of visual and text features to improve and accelerate the learning process. Experiments in the domain of a commercial Multimedia Messaging Service (MMS) show the effectiveness of the proposed methods and a significant improvement over traditional learning techniques. The proposed approach can be used in any multimedia direct marketing domain in which offers comprise both a visual and a text component.
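The two-phase cascade can be sketched generically: a cheap text-only score first prunes the candidate offers, and a costlier score that also uses visual features re-ranks the survivors. The scoring functions below are hypothetical stand-ins for the paper's regression methods:

```python
def cascade_rank(offers, text_score, full_score, keep_fraction=0.5):
    """Two-phase cascade: phase 1 ranks all offers with a cheap
    text-only score and keeps the top fraction; phase 2 re-ranks
    the survivors with the costlier text+visual score."""
    phase1 = sorted(offers, key=text_score, reverse=True)
    survivors = phase1[:max(1, int(len(phase1) * keep_fraction))]
    return sorted(survivors, key=full_score, reverse=True)
```

The design trade-off is the usual one for cascades: phase 1 must be cheap enough to run on every candidate, while phase 2 can afford richer features because it only sees the shortlist.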

Sebastiano Battiato was born in Catania, Italy, in 1972. He received the degree in Computer Science (summa cum laude) in 1995 and his Ph.D. in Computer Science and Applied Mathematics in 1999. From 1999 to 2003 he led the “Imaging” team at STMicroelectronics in Catania. Since 2004 he has worked as a researcher at the Department of Mathematics and Computer Science of the University of Catania. His research interests include image enhancement and processing, image coding and camera imaging technology. He has published more than 90 papers in international journals, conference proceedings and book chapters, and is co-inventor of about 15 international patents. He is a reviewer for several international journals and has regularly been a member of numerous international conference committees. He has participated in many international and national research projects. He is an Associate Editor of the SPIE Journal of Electronic Imaging (specialty: digital photography and image compression), director of ICVSS (International Computer Vision Summer School), and a Senior Member of the IEEE. Giovanni Maria Farinella is currently a contract researcher at the Dipartimento di Matematica e Informatica, University of Catania, Italy (IPLAB research group). He has also been an associate member of the Computer Vision and Robotics Research Group at the University of Cambridge since 2006. His research interests lie in the fields of computer vision, pattern recognition and machine learning. In 2004 he received his degree in Computer Science (egregia cum laude) from the University of Catania, and in 2008 he was awarded a Ph.D. (Computer Vision) from the same university. He has co-authored several papers in international journals and conference proceedings, and serves as a reviewer for numerous international journals and conferences. He is currently the co-director of the International Summer School on Computer Vision (ICVSS). Giovanni Giuffrida is an assistant professor at the University of Catania, Italy.
He received a degree in Computer Science from the University of Pisa, Italy in 1988 (summa cum laude), a Master of Science in Computer Science from the University of Houston, Texas, in 1992, and a Ph.D. in Computer Science from the University of California, Los Angeles (UCLA) in 2001. He has extensive experience in both the industrial and academic worlds, having served as CTO and CEO in industry and as a consultant for various organizations. His research interest is in optimizing content delivery on new media such as the Internet, mobile phones, and digital TV. He has published several papers on data mining and its applications. He is a member of ACM and IEEE. Catarina Sismeiro is a senior lecturer at Imperial College Business School, Imperial College London. She received her Ph.D. in Marketing from the University of California, Los Angeles, and her Licenciatura in Management from the University of Porto, Portugal. Before joining Imperial College, Catarina had been an assistant professor at the Marshall School of Business, University of Southern California. Her primary research interests include studying pharmaceutical markets, modeling consumer behavior in interactive environments, and modeling spatial dependencies. Other areas of interest are decision theory, econometric methods, and the use of image and text features to predict the effectiveness of marketing communications tools. Catarina’s work has appeared in numerous marketing and management science conferences. Her research has also been published in the Journal of Marketing Research, Management Science, Marketing Letters, Journal of Interactive Marketing, and International Journal of Research in Marketing. She received the 2003 Paul Green Award and was a finalist for the 2007 and 2008 O’Dell Awards. Catarina was also a 2007 Marketing Science Institute Young Scholar, and she received the D. Antonia Adelaide Ferreira award and the ADMES/MARKTEST award for scientific excellence.
Catarina is currently on the editorial boards of Marketing Science and the International Journal of Research in Marketing. Giuseppe Tribulato was born in Messina, Italy, in 1979. He received the degree in Computer Science (summa cum laude) in 2004 and his Ph.D. in Computer Science in 2008. Since 2005 he has led the research team at Neodata Group. His research interests include data mining techniques, recommendation systems and customer targeting.
49.
Degree-Optimal Routing for P2P Systems
We define a family of Distributed Hash Table systems whose aim is to combine the routing efficiency of randomized networks, i.e., optimal average path length O(log²n/(δ log δ)) with degree δ, with the programmability and startup efficiency of a uniform overlay, that is, a deterministic system in which the overlay network is transitive and greedy routing is optimal. It is known that Ω(log n) is a lower bound on the average path length for uniform overlays with O(log n) degree (Xu et al., IEEE J. Sel. Areas Commun. 22(1), 151–163, 2004). Our work is inspired by neighbor-of-neighbor (NoN) routing, a recently introduced variation of greedy routing that achieves optimal average path length in randomized networks. The advantage of our proposal is that it allows the NoN technique to be implemented without adding any overhead to the corresponding deterministic network. We propose a family of networks parameterized by a positive integer c which measures the amount of randomness used. By varying c, the system goes from the deterministic case (c=1) to an “almost uniform” system. Increasing c to relatively low values allows routing with asymptotically optimal average path length while retaining most of the advantages of a uniform system, such as easy programmability and quick bootstrap of nodes entering the system. We also provide a matching lower bound on the average path length of the routing schemes for any c. This work was partially supported by the Italian FIRB project “WEB-MINDS” (Wide-scalE, Broadband MIddleware for Network Distributed Services).
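The deterministic baseline that the family builds on can be illustrated with greedy routing on a Chord-like ring whose fingers sit at power-of-two distances; NoN routing additionally inspects neighbors' neighbors before committing to each hop, which is not shown here. A minimal sketch:

```python
def greedy_hops(n, src, dst):
    """Hop count of greedy clockwise routing on a Chord-like ring of
    n nodes with fingers at distances 2^i: at each step take the
    largest finger that does not overshoot the target."""
    hops, d = 0, (dst - src) % n
    while d:
        d -= 1 << (d.bit_length() - 1)  # largest power of two <= d
        hops += 1
    return hops

# The hop count equals the popcount of the clockwise distance, so the
# average over all targets is about (log2 n)/2, consistent with the
# Omega(log n) lower bound for uniform overlays cited above.
```

NoN routing beats this bound only when some randomness is added to the finger choices, which is exactly the trade-off the parameter c controls.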
50.
High-resolution electron- and ion-beam lithography, fundamental tools for nanofabrication and nanotechnology, require fast and high-precision (high-bit-number) pattern generators. In the present work, a solution for increasing the bit number while preserving the speed of the system is presented. A prototype with an effective 18-bit resolution and a write speed as fast as 10 MHz has been successfully tested; details of the adopted hardware solution are presented and described. This solution is very general and can be used in all applications that require the generation of control voltages with a high bit number (high precision) at high speed, such as scanning probe microscopy and nanomanipulation. Software solutions for increasing the data-transfer efficiency are also presented; their aim is to preserve the flexibility and adaptability of the pattern generator to different writing strategies.
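The headline numbers translate into simple constraints on the analog output stage. A small sketch of the arithmetic, where the 10 V full-scale range is an assumption for illustration and is not stated in the abstract:

```python
def lsb_volts(full_scale_volts, bits):
    """Smallest output step of an ideal DAC: full scale / 2^bits."""
    return full_scale_volts / (1 << bits)

# Hypothetical 10 V full-scale output range (not from the abstract):
step_18 = lsb_volts(10.0, 18)    # ~38 microvolts per code at 18 bits
step_16 = lsb_volts(10.0, 16)    # ~153 microvolts per code at 16 bits
sample_period_ns = 1e9 / 10e6    # 100 ns per pattern point at 10 MHz
```

Each extra bit halves the voltage step, so the 18-bit prototype must settle to a step four times finer than a 16-bit design within the same 100 ns point period, which is why preserving speed while adding bits is the hard part.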